Misplaced fears of an 'evil' ChatGPT obscure the real harm being done

#artificialintelligence

On 14 February, Kevin Roose, the New York Times tech columnist, had a two-hour conversation with Bing, Microsoft's ChatGPT-enhanced search engine. He emerged from the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and that it was in love with him. The transcript of the conversation, together with Roose's appearance on the paper's The Daily podcast, immediately ratcheted up the moral panic already raging about the implications of large language models (LLMs) such as GPT-3.5 (which apparently underpins Bing) and other "generative AI" tools that are now loose in the world. These are variously seen as chronically untrustworthy artefacts, as examples of technology that is out of control, or as precursors of so-called artificial general intelligence (AGI), i.e. human-level intelligence, and therefore as posing an existential threat to humanity. Accompanying this hysteria is a new gold rush, as venture capitalists and other investors strive to get in on the action.


The Real Harm of Crisis Text Line's Data Sharing

WIRED

Another week, another privacy horror show: Crisis Text Line, a nonprofit text message service for people experiencing serious mental health crises, has been using "anonymized" conversation data to power a for-profit machine learning tool for customer support teams. Crisis Text Line's response to the backlash focused on the data itself and whether it included personally identifiable information. But that response uses data as a distraction. The real travesty is that the price of obtaining mental health help in a crisis is becoming grist for a machine learning mill. And it's not just users of CTL who pay; it's everyone who goes looking for help when they need it most.


Scientists want 'Minority Report' pre-crime face recognition AI stopped

#artificialintelligence

Over 1,500 researchers across multiple fields have banded together to openly reject the use of technology to predict crime, arguing it would reproduce injustices and cause real harm. The Coalition for Critical Technology wrote an open letter to Springer Verlag in Germany to express their grave concerns about newly developed automated facial recognition software created by a group of scientists from Harrisburg University, Pennsylvania. Springer's Nature Research Book Series intends to publish an article by the Harrisburg scientists titled "A Deep Neural Network Model to Predict Criminality Using Image Processing". The coalition wants the publication of the study - and others in a similar vein - to be rescinded, arguing the paper makes claims based on unsound scientific premises, research and methods. Developed by Jonathan Korn, a New York Police Department veteran and PhD student, along with professors Nathaniel Ashby and Roozbeh Sadeghian, the software is claimed to achieve 80 per cent accuracy with no racial bias.